Machine learning model development and optimisation can be a cumbersome and resource-intensive process. Custom models are often difficult to build and deploy, and they require infrastructure and expertise that are costly to acquire and maintain. The machine learning product development lifecycle must therefore account for the difficulties of developing and deploying such models. evoML is an AI-powered tool that automates machine learning model development and optimisation, including optimisation of the model code itself. Core functionalities of evoML include data cleaning, exploratory analysis, feature analysis and generation, model optimisation, model evaluation, model code optimisation, and model deployment. A key feature of evoML is that it embeds code and model optimisation into the model development process and provides multi-objective optimisation capabilities.
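As an illustration of the kind of trade-off multi-objective optimisation must resolve, the sketch below selects Pareto-optimal candidate models by accuracy and inference latency. The model names and metrics are hypothetical for illustration, not evoML's actual API or output:

```python
# Hypothetical sketch: picking Pareto-optimal models when optimising for two
# objectives (higher accuracy, lower latency). Not evoML's actual interface.
candidates = [
    {"name": "xgboost",  "accuracy": 0.91, "latency_ms": 12.0},
    {"name": "logreg",   "accuracy": 0.84, "latency_ms": 1.5},
    {"name": "mlp",      "accuracy": 0.90, "latency_ms": 20.0},
    {"name": "catboost", "accuracy": 0.92, "latency_ms": 15.0},
]

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    return (a["accuracy"] >= b["accuracy"] and a["latency_ms"] <= b["latency_ms"]
            and (a["accuracy"] > b["accuracy"] or a["latency_ms"] < b["latency_ms"]))

# The Pareto front is every candidate that no other candidate dominates.
pareto_front = [m for m in candidates
                if not any(dominates(o, m) for o in candidates if o is not m)]
print([m["name"] for m in pareto_front])  # ['xgboost', 'logreg', 'catboost']
```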
Recent object detection models for infrared (IR) imagery are based upon deep neural networks (DNNs) and require large amounts of labeled training imagery. However, publicly available datasets that can be used for such training are limited in their size and diversity. To address this problem, we explore cross-modal style transfer (CMST) to leverage large and diverse color imagery datasets so that they can be used to train DNN-based object detectors for IR imagery. We evaluate six contemporary stylization methods on four publicly available IR datasets (the first comparison of its kind) and find that CMST is highly effective for DNN-based detectors. Surprisingly, we find that existing data-driven methods are outperformed by a simple grayscale stylization (an average of the color channels). Our analysis reveals that existing data-driven methods are either too simplistic or introduce significant artifacts into the imagery. To overcome these limitations, we propose meta-learning style transfer (MLST), which learns a stylization by composing and tuning well-behaved analytic functions. We find that MLST leads to more complex stylizations without introducing significant image artifacts and achieves the best overall detector performance on our benchmark datasets.
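The grayscale baseline the abstract describes (an average of the color channels) is simple to reproduce; a minimal sketch, assuming RGB images stored as H x W x 3 NumPy arrays:

```python
import numpy as np

def grayscale_stylize(rgb: np.ndarray) -> np.ndarray:
    """Cross-modal stylization baseline: average the color channels, then
    replicate the result so the image keeps the 3-channel shape that
    RGB-pretrained detectors expect."""
    gray = rgb.astype(np.float32).mean(axis=-1, keepdims=True)  # H x W x 1
    return np.repeat(gray, 3, axis=-1)                          # H x W x 3
```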
Representing and reasoning about uncertainty is crucial for autonomous agents acting in partially observable environments with noisy sensors. Partially observable Markov decision processes (POMDPs) serve as a general framework for representing problems in which uncertainty is an important factor. Online sample-based POMDP methods have emerged as efficient approaches to solving large POMDPs and have been shown to extend to continuous domains. However, these solutions struggle to find long-horizon plans in problems with significant uncertainty. Exploration heuristics can help guide planning, but many real-world settings contain significant task-irrelevant uncertainty that might distract from the task objective. In this paper, we propose STRUG, an online POMDP solver capable of handling domains that require long-horizon planning with significant task-relevant and task-irrelevant uncertainty. We demonstrate our solution on several temporally extended versions of toy POMDP problems as well as on robotic manipulation of articulated objects, using a neural perception frontend to construct a distribution of possible models. Our results show that STRUG outperforms current sample-based online POMDP solvers on several tasks.
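For context, online sample-based POMDP solvers share a common skeleton: maintain a particle approximation of the belief and score candidate actions by Monte Carlo rollouts. The sketch below shows that generic skeleton under an assumed generative-model interface; it is not STRUG itself, which adds long-horizon guidance on top of this core:

```python
import random

def plan_action(particles, actions, step, horizon=20, rollouts=100, gamma=0.95):
    """Pick the action with the highest mean discounted return over Monte
    Carlo rollouts from states sampled from the particle belief.
    `step(state, action) -> (next_state, reward)` is an assumed generative
    model; real solvers (e.g. POMCP) also grow a search tree and update the
    belief with observations between planning steps."""
    best_action, best_value = None, float("-inf")
    for action in actions:
        total = 0.0
        for _ in range(rollouts):
            state = random.choice(particles)          # sample from the belief
            state, reward = step(state, action)
            ret, discount = reward, gamma
            for _ in range(horizon - 1):              # random rollout policy
                state, reward = step(state, random.choice(actions))
                ret += discount * reward
                discount *= gamma
            total += ret
        value = total / rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```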
In this paper, we examine the problem of visibility-aware robot navigation among movable obstacles (VANAMO). A variant of the well-known NAMO robotic planning problem, VANAMO puts additional visibility constraints on robot motion and object movability. This new problem formulation lifts the restrictive assumption that the map is fully visible and the object positions are fully known. We provide a formal definition of the VANAMO problem and propose the Look and Manipulate Backchaining (LaMB) algorithm for solving such problems. LaMB has a simple vision-based API that makes it more easily transferable to real-world robot applications and allows it to scale to large 3D environments. To evaluate LaMB, we construct a set of tasks that illustrate the complex interplay between visibility and object movability that can arise in mobile base manipulation problems in unknown environments. We show that LaMB outperforms NAMO and visibility-aware motion planning approaches, as well as simple combinations of them, on complex manipulation problems with partial observability.
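The abstract does not spell out the vision-based API; a hypothetical interface of the kind such a planner could target might look like the following. All method names and signatures here are illustrative assumptions, not LaMB's actual interface:

```python
from typing import Protocol

class VisionNavEnv(Protocol):
    """Hypothetical vision-based interface a VANAMO planner could target.
    Names are illustrative, not taken from the LaMB paper."""

    def observe(self) -> "DepthImage":
        """Return the robot's current egocentric observation; only visible
        geometry is revealed (no ground-truth map)."""
        ...

    def move_to(self, pose: "Pose") -> bool:
        """Navigate the base toward a pose; fails if blocked by obstacles."""
        ...

    def manipulate(self, obstacle_id: int, target_pose: "Pose") -> bool:
        """Attempt to relocate an obstacle; movability is only discovered by
        interaction, mirroring VANAMO's partial-observability assumptions."""
        ...
```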
Correctly recognizing the behaviors of children with Autism Spectrum Disorder (ASD) is of vital importance for the diagnosis of autism and timely early intervention. However, observations recorded during treatment by the parents of autistic children may not be accurate or objective. In such cases, automatic recognition systems based on computer vision and machine learning (in particular, deep learning) can alleviate this issue to a large extent. Existing human action recognition models can now achieve persuasive performance on challenging activity datasets, e.g., daily activities and sports. However, problem behaviors in children with ASD are very different from these general activities, and recognizing them via computer vision is less studied. In this paper, we first evaluate a strong baseline for action recognition, the Video Swin Transformer, on two autism behavior datasets (SSBD and ESBD) and show that it achieves high accuracy and outperforms previous methods by a large margin, demonstrating the feasibility of vision-based problem behavior recognition. Moreover, we propose language-assisted training to further enhance action recognition performance. Specifically, we develop a two-branch multimodal deep learning framework that incorporates a "freely available" language description for each type of problem behavior. Experimental results demonstrate that this additional language supervision brings a clear performance boost for autism problem behavior recognition compared to using video information alone (a 3.49% improvement on ESBD and 1.46% on SSBD).
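A minimal sketch of the two-branch idea, a video branch scored against embeddings of per-class behavior descriptions, might look as follows. The module choices, dimensions, and loss are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageAssistedClassifier(nn.Module):
    """Illustrative two-branch model: video features are classified by their
    similarity to precomputed embeddings of one text description per class."""

    def __init__(self, video_backbone: nn.Module, text_embeds: torch.Tensor):
        super().__init__()
        self.video_backbone = video_backbone  # e.g. a Video Swin trunk -> [batch, dim]
        # text_embeds: [num_classes, dim], from any frozen text encoder.
        self.register_buffer("text_embeds", F.normalize(text_embeds, dim=-1))
        self.logit_scale = nn.Parameter(torch.tensor(10.0))

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        v = F.normalize(self.video_backbone(video), dim=-1)  # [batch, dim]
        return self.logit_scale * v @ self.text_embeds.t()   # [batch, num_classes]

# Training then uses ordinary cross-entropy over the similarity logits:
# loss = F.cross_entropy(model(clips), labels)
```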
The use of synthetic (or simulated) data for training machine learning models has grown rapidly in recent years. Synthetic data can typically be generated faster and more cheaply than its real-world counterpart. One challenge in using synthetic imagery, however, is scene design: for example, the selection of content and its features and spatial arrangement. To be effective, the design must be not only realistic but also suited to the target domain, which (by assumption) is unlabeled. In this work, we propose an approach for automatically choosing the design of synthetic imagery based on unlabeled real-world images. Our method, termed NAMS, builds on the seminal meta-simulation approach. In contrast to current state-of-the-art methods, our approach can be pre-trained once offline and then provides fast design inference for new target images. Using both synthetic and real-world problems, we show that NAMS infers synthetic designs that match both in-domain and out-of-domain target imagery, and that segmentation models trained on NAMS-designed images outperform both naïve random designs and state-of-the-art meta-simulation methods.
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete in developing conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to help users complete increasingly complex tasks, new conversational AI techniques and evaluation platforms are needed. The Alexa Prize TaskBot Challenge, established in 2021, builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, using both voice and visual modalities. This challenge requires TaskBots to identify and understand the user's needs, to identify and integrate task and domain knowledge, and to develop new ways of engaging users without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to teams through the CoBot Toolkit, and summarizes the approaches participating teams took to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
Performing high-level search in an abstraction of the environment to guide low-level decision-making is an effective approach to solving long-horizon tasks in continuous state and action spaces. Recent work has shown that the action abstractions enabling such bilevel planning can be learned in the form of symbolic operators and neural samplers, given symbolic predicates and demonstrations that achieve known goals. In this work, we show that existing approaches fall short in environments where actions tend to cause a large number of predicates to change. To address this issue, we propose to learn operators with ignore effects. The key idea motivating our approach is that modeling every observed change in the predicates is unnecessary; the only changes that need to be modeled are those required for high-level search to achieve the specified goal. Experimentally, we show that our approach is able to learn operators with ignore effects across six hybrid robotic domains that enable an agent to solve novel task variations with different initial states, goals, and numbers of objects, significantly more reliably than several baselines.
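In STRIPS-style notation, an operator with ignore effects simply declines to predict changes to a designated set of predicates. A hedged sketch of one plausible semantics (ground atoms of ignored predicates are dropped rather than predicted); names and representation are illustrative, not the paper's code:

```python
from dataclasses import dataclass
from typing import FrozenSet

Atom = str  # a ground atom, e.g. "Holding(block1)", for illustration

@dataclass(frozen=True)
class Operator:
    """STRIPS-like operator extended with ignore effects: predicates whose
    changes the abstract model deliberately leaves unmodeled."""
    preconditions: FrozenSet[Atom]
    add_effects: FrozenSet[Atom]
    delete_effects: FrozenSet[Atom]
    ignore_effects: FrozenSet[str] = frozenset()  # predicate *names* to ignore

    def apply(self, state: FrozenSet[Atom]) -> FrozenSet[Atom]:
        assert self.preconditions <= state
        # Atoms of ignored predicates are dropped instead of being predicted
        # as add/delete effects, keeping the learned model small.
        kept = frozenset(a for a in state
                         if a.split("(")[0] not in self.ignore_effects)
        return (kept - self.delete_effects) | self.add_effects
```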
Through a series of federal initiatives and orders, the U.S. Government has been making strides to ensure American leadership in AI. These broad strategy documents have influenced organizations such as the United States Department of the Air Force (DAF). The DAF-MIT AI Accelerator is an initiative between the DAF and MIT to bridge the gap between AI researchers and DAF mission requirements. Several projects supported by the DAF-MIT AI Accelerator are developing public challenge problems that address numerous federal AI research priorities. These challenges target the priorities by making large, AI-ready datasets publicly available, incentivizing open-source solutions, and creating a demand signal for dual-use technologies that can stimulate further research. In this paper, we describe these public challenges being developed and how their application contributes to scientific advances.
Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback. Hierarchical approaches, such as task and motion planning (TAMP), address these challenges by decomposing decision-making into two or more levels of abstraction. In a setting where demonstrations and symbolic predicates are given, prior work has shown how to learn symbolic operators and neural samplers for use with hand-designed parameterized policies. Our main contribution is a method for learning parameterized policies in combination with operators and samplers. These components are packaged into modular neuro-symbolic skills and sequenced together with search-then-sample TAMP to solve new tasks. In experiments in four robotics domains, we show that our approach, bilevel planning with neuro-symbolic skills, can solve a wide range of tasks with varying initial states, goals, and objects, outperforming six baselines and ablations. Video: https://youtu.be/pbfzp8rpugg Code: https://tinyurl.com/skill-learning
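A hedged sketch of how such a modular skill might be packaged; the field names and types are illustrative assumptions (the actual implementation is at the code link above):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class NeuroSymbolicSkill:
    """Illustrative packaging of the three learned components the abstract
    describes: a symbolic operator for high-level search, a neural sampler
    proposing continuous parameters, and a parameterized low-level policy."""
    operator: Any                        # symbolic preconditions/effects
    sampler: Callable[[Any], Any]        # abstract state -> continuous params
    policy: Callable[[Any, Any], Any]    # (observation, params) -> action

# Search-then-sample TAMP would sequence skills by planning over the
# operators, then sampling parameters and rolling out each policy in turn.
```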